ANNE MARIE TAVELLA
Partner, Davis Wright Tremaine, LLP
MATTHEW GURR
Associate, Davis Wright Tremaine, LLP
Contractors & The Law
Striking a Balance
Rewards and risks of AI use on construction projects
Generative AI is rapidly transforming the construction industry by reshaping project planning, management, and performance. AI also comes with legal risks for contractors. This article explores the advantages and legal challenges of AI use in construction and outlines strategies to mitigate liability.

Benefits of AI Use on Construction Projects
AI is an increasingly common tool in construction. Contractors use a growing range of AI systems, with more technological developments likely to provide industry-wide benefits in the near future. Today, contractors rely on AI tools to streamline management tasks. AI is used to perform résumé review and background checks to vet candidates. AI programs are also used to review lengthy construction contracts, analyze risks, and draft project-specific clauses.

Design professionals use AI to generate project design documents, and developmental AI systems could potentially analyze designs for compliance with applicable law. AI programs are also used to create procurement plans, and some can analyze market data to time purchases effectively and avoid cost increases.

Additionally, onsite AI use is expanding. AI-controlled drones and cameras are used to track progress and identify risks. AI automation can also help remedy labor shortages. While uncommon, some contractors use AI-driven machines to perform repetitive site tasks or certain types of construction in extreme climates.

While the potential benefits of AI grow, a limited understanding of AI’s drawbacks poses risks for contractors.

Legal Concerns
Contractors who fail to understand the risks of AI may face legal exposure. AI is only as reliable as the information from which it draws and can produce inaccurate outputs. Thus, blind reliance on AI tools is dangerous. Effective use of AI requires quality control measures and confirmation that AI use meets contract requirements.

Contractors using AI risk breaching contractual confidentiality. Most AI systems are operated by third parties without confidentiality obligations. As such, a confidentiality breach can occur even where the contractor has a confidentiality agreement with the system operator. Contractors should review confidentiality and cybersecurity clauses before using AI systems for contract data.

Further, AI systems learn by obtaining information. Consequently, AI companies seek control over AI system input data, including project, design, or other protected information. This can cause intellectual property disputes over project information. First, inputting design documents into an AI system could be a breach of contract or of intellectual property rights. Second, AI-generated designs subject to intellectual property protections may result in litigation over ownership, licensing, and usage rights in the design.

As AI evolves, there are also product liability concerns. System defects can injure personnel or damage property. The risk of harm increases when a contractor inadequately tests AI systems before use or does not follow system guidelines. Strong quality control systems are essential to ensure that AI tools function properly and to avoid liability.

Moreover, AI systems produce outcomes via proprietary algorithms. Systems provided by third-party vendors can lack transparency in how decisions are made, obstruct system oversight, and limit troubleshooting capabilities. Simply put, it may be difficult for contractors to understand AI decision-making, identify errors, or fix defects without vendor assistance.

Reducing Risk
Before using AI systems, contractors must confirm there is no contractual prohibition on AI use. If the contract is silent, contractors should review confidentiality and cybersecurity provisions to ensure such use is compliant. When permitted, contractors should tailor clauses for AI tools. This includes terms addressing the scope of use, human oversight of AI systems, data ownership, cybersecurity responsibilities, and contractor indemnity for defects. If prohibited, contractors must clarify usage limitations and adhere to the restrictions throughout the project’s lifecycle. General contractors should ensure that AI use by subcontractors is also contractually compliant.

Contractors should also implement cybersecurity measures for sensitive data inputted into, or generated by, AI systems. Such measures include encryption protections, access controls, and system breach response protocols. Contractors should further establish internal policies for data storage and the handling of protected information.

Most importantly, contractors must provide human oversight of AI systems. Human oversight can limit operational error risks, validate AI decision-making, and maintain efficiency. Oversight should also confirm that systems operate within applicable laws. Most problems with AI use stem from the failure to apply quality control processes that ensure AI tools provide accurate information and function properly.

The integration of AI into construction projects offers transformative benefits, but it also comes with risks. By identifying and managing those risks, contractors can capitalize on industry innovation while avoiding liability.

Anne Marie Tavella is a partner in the Anchorage office of Davis Wright Tremaine LLP. Matthew Gurr is an associate in the Seattle office of Davis Wright Tremaine LLP. Both attorneys focus on government contracts, regulatory compliance, and construction litigation.