Generative AI is rapidly transforming the construction industry by reshaping project planning, management, and performance. At the same time, AI carries legal risks for contractors. This article explores the advantages and legal challenges of AI use in construction and outlines strategies to mitigate liability.
Design professionals use AI to generate project design documents, and emerging AI systems could potentially analyze designs for compliance with applicable law. AI programs are also used to create procurement plans; some can analyze market data to time purchases effectively and avoid cost increases.
Additionally, onsite AI use is expanding. AI-controlled drones and cameras track progress and identify risks, and AI automation can help remedy labor shortages. While still uncommon, some contractors use AI-driven machines to perform repetitive site tasks or certain types of construction in extreme climates.
As the potential benefits of AI grow, however, a limited understanding of its drawbacks poses risks for contractors.
Contractors using AI risk breaching contractual confidentiality. Most AI systems are operated by third parties who owe no confidentiality obligations to project participants. As a result, disclosing contract data to an AI system can breach a contractor's confidentiality duties even where the contractor has a confidentiality agreement with the system operator. Contractors should therefore review confidentiality and cybersecurity clauses before using AI systems for contract data.
Further, AI systems learn from the information they receive. Consequently, AI companies seek control over system input data, including project, design, or other protected information. This can trigger intellectual property disputes over project information. First, inputting design documents into an AI system could breach a contract or infringe intellectual property rights. Second, AI-generated designs subject to intellectual property protections may spark litigation over ownership, licensing, and usage rights in the design.
As AI evolves, there are also product liability concerns. System defects can injure personnel or damage property. The risk of harm increases when a contractor inadequately tests AI systems before use or does not follow system guidelines. Strong quality control systems are essential to ensure that AI tools function properly and to avoid liability.
Moreover, AI systems produce outcomes via proprietary algorithms. Systems provided by third-party vendors can lack transparency in how decisions are made, obstruct system oversight, and limit troubleshooting capabilities. Simply put, it may be difficult for contractors to understand AI decision-making, identify errors, or fix defects without vendor assistance.
Contractors should also implement cybersecurity measures for sensitive data input into, or generated by, AI systems. Such measures include encryption, access controls, and breach response protocols. Contractors should also establish internal policies for storing data and handling protected information.
Most importantly, contractors must maintain human oversight of AI systems. Human oversight can limit operational errors, verify AI decision-making, and manage efficiency. Oversight should also confirm that systems operate within applicable law. Most issues surrounding AI use stem from a failure to apply quality control processes that ensure AI tools provide accurate information and function properly.
The integration of AI into construction projects presents transformative benefits, but it also comes with risks. By identifying and managing those risks, contractors can capitalize on industry innovation while avoiding liability.
