Use of Artificial Intelligence in Engineering Practice
Facts
Engineer A, an environmental engineer with several years of experience and a professional engineering license, was retained by Client W to prepare a comprehensive report addressing the manufacture, use, and characteristics of an organic compound identified as an emerging contaminant of concern. This work required Engineer A to analyze groundwater monitoring data from a site Engineer A had been observing for over a year. In addition, Engineer A was tasked with developing engineering design documents (plans and specifications) for modifications to groundwater infrastructure at the same site.

Engineer A is known for strong technical expertise but is personally less confident in technical writing. Previously, Engineer A had relied on guidance and quality assurance reviews by a mentor and supervisor, Engineer B, to refine report drafts, but Engineer B recently retired and was no longer available to Engineer A in a work capacity. Faced with the need to deliver both the report and the engineering design documents without Engineer B's review and mentorship, Engineer A opted to use open-source artificial intelligence (AI) software to create an initial draft of the report and to use AI-assisted drafting tools to generate preliminary design documents. The AI drafting software was also open-source, was new to the market, and Engineer A had no previous experience with the tool.

For the report, Engineer A gathered the relevant information provided by Client W and relied on the AI software to synthesize it: Engineer A input the information into the AI software and, after a few refining prompts, received a first draft of the report generated by the software. Not being familiar with the full functionality of the AI software, including the accuracy and originality of AI-generated text, Engineer A conducted a thorough review of the report, cross-checking key facts against professional journal articles and running search engine queries on the phrasing to ensure the content did not match any existing language. Engineer A also made minor adjustments to some of the wording to personalize the content. Engineer A did not cite the use of the AI software or its large language models, and submitted the draft report to Client W for review, including language to clearly identify that the supplied report was a draft, but applied their seal consistent with state law.

For the engineering design documents, Engineer A entered the information gathered from Client W into the AI software and relied on the AI-assisted drafting tools to generate a preliminary design of the plans, including basic layouts and technical specifications. Engineer A completed a cursory review of the AI-generated plans and adjusted certain elements to align with site-specific conditions. Again, Engineer A did not cite the AI-assisted drafting tools used to generate the engineering design documents.

When Client W reviewed the draft report, Client W noted that the section analyzing the groundwater monitoring data would benefit from minor edits for grammar and clarity, but found the introduction discussing the contaminant's manufacture, use, and characteristics to be exceptionally polished. The client commented that the report read as if written by two different authors but was otherwise satisfactory.
Client W, however, noticed several issues with the AI-generated design documents, including misaligned dimensions and an omission of key safety features required by local regulations. Client W raised concerns about the accuracy and reliability of the engineering design and instructed Engineer A to revise the plans, ensuring that all elements satisfied the necessary professional and regulatory standards.
Question
1. Was Engineer A's use of AI to create the report text ethical, given that Engineer A thoroughly checked the report?
2. Was Engineer A's use of AI-assisted drafting tools to create the engineering design documents ethical, given that Engineer A reviewed the design at a high level?
3. If the use of AI was acceptable, did Engineer A have an ethical obligation to disclose the use of AI in any form to the Client?
Conclusion
1. Engineer A's use of AI in report writing was partly ethical and partly unethical. Engineer A was competent and thoroughly reviewed and verified the AI-generated content, ensuring accuracy and compliance with professional standards. However, Engineer A did not obtain the client's permission before disclosing private information, nor did Engineer A include the required technical citations. Ethical use of AI to create the report text must satisfy all pertinent requirements.
2. The use of AI-assisted drafting tools by Engineer A was not unethical per se. However, Engineer A's misuse of the tool, by failing to maintain Responsible Charge over the AI tool and its output before sealing the document and providing it to Client W, was unethical.
3. As with other software used in the design or detailing process, Engineer A has no professional or ethical obligation to disclose AI use to Client W (unless such disclosure is required under Engineer A's contract with Client W). At the time of the BER's review of this case, there is no universal guideline mandating AI disclosure in engineering work. However, ethical principles favor transparency when AI plays a substantial role in generating work products. To uphold ethical standards, engineers integrating AI into their practice should adopt rigorous verification processes and consider disclosing AI involvement when it plays a significant role in the final product.
Discussion
The Board of Ethical Review (BER) has a long history of openly welcoming and advocating for the introduction of new technologies into engineering work, so long as those technologies are used in a way that keeps the engineering professional. Artificial intelligence (AI) language processing software and AI-assisted drafting tools fall into this category. The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly. However, other ethical considerations may arise when applying AI in different engineering contexts, such as engineering education or engineering research. The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances.

Almost 35 years ago, in BER Case 90-6, the BER looked at a hypothetical involving an engineer's use of computer assisted drafting and design tools. The BER was asked whether it was ethical for an engineer to sign and seal documents prepared using such a system. The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case, the use of AI. The case begins:

"In recent years, the engineering profession has been 'revolutionized' by exponential growth in new and innovative computer technological breakthroughs. None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI]. The BER considers the change to CAD to merely represent a drafting enhancement. The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology. As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement. That translates to a scenario for potential liability."

In BER Case 90-6, the BER determined that it was ethical for an engineer to sign and seal documents created using a CADD system, whether prepared by the engineer themselves or by other engineers working under their direction and control.

The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution. These considerations culminate in a key question: Is using AI adding a new tool to an engineer's toolbox, or is it something more?

Fundamental Canon I.2 states that engineers "perform services only in areas of their competence," and Code section II.2.a states that engineers must "undertake assignments only when qualified by education or experience in the specific technical fields involved." Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks. Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A's direction and control, as required by Code section II.2.b: "[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control." Further, the use of AI to assist with writing does not inherently constitute deception. Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies. Fundamental Canon I.5 requires engineers to "avoid deceptive acts," and it was not violated here. Finally, Engineer A performed a thorough review and cross-checked the work on the report, much as Engineer A likely would have done had the report been initially drafted by an engineer intern or other support staff.

While careful review and checking of AI-generated content is consistent with ethical use, this does not end the inquiry into Engineer A's actions. Per Code section II.1.c, confidential information may be shared only with the prior consent of the client. When Engineer A uploaded Client W's information into the open-source AI interface, this was tantamount to placing the client's private information in the public domain, and the facts do not indicate that Engineer A obtained Client W's permission to do so. Similarly, the facts do not indicate that the AI-generated report included citations of pertinent documents of technical authority. Per Code section III.9, engineers are required to "give credit for engineering work to those to whom credit is due," so Engineer A's ethical use of the AI software would need to include appropriate citations. Absent this professional level of care, diligence, and documentation, Engineer A's use of the AI language processing software was less than ethical.

In addition to using AI to prepare the report, Engineer A also prepared draft design documents with an AI-assisted drafting tool that was new to the market. Engineer A elected to conduct only a high-level review and adjusted certain elements to align with site-specific conditions. When Client W reviewed the design documents, they found misaligned dimensions, and key safety features (including those necessary for compliance with local regulations) had been omitted.

Turning to the omission of key safety features in the AI-generated plans, the BER has looked at a similar situation previously. BER Case 98-3 discussed a solicitation by mail encouraging engineers to use new technology to gain more work. The solicitation read: "Now -- thanks to a revolutionary new CD-ROM -- specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse -- no matter your design experience. For instance, never designed a highway before? No problem. Just point to the 'Highways' window and click." The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software. In its discussion, the BER reviewed several cases involving engineering competency and concluded that it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case. The BER noted: "In closing, the [BER]'s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances. Rather, it is the [BER]'s position that technology has an important place in the practice of engineering, but it must never be a replacement or a substitute for engineering judgment."
Thus, Engineer A's approach to reviewing the AI-generated engineering designs presents greater ethical concerns than the use of AI for report writing. While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A's ethical and professional obligations. To begin, it is the BER's view that under the facts, unlike the situation in BER Case 98-3, Engineer A is not incompetent; the facts specifically note that Engineer A has "several years of experience" and "strong technical expertise." But the facts also indicate that Engineer A was operating in a compromised manner, namely without the help of Engineer B, such that Engineer A relied on the AI-generated plans and specifications without proper oversight.

Code section II.2.b states that "[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control." By relying on AI-assisted tools without a comprehensive verification of their output, Engineer A risked violating this requirement. The failure to detect misaligned dimensions and omitted safety features further indicates that Engineer A did not exercise sufficient diligence. The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with Fundamental Canon I.1, "hold paramount the safety, health, and welfare of the public." Engineer A's oversight of the engineering plans was inadequate, raising ethical concerns: AI-generated technical work requires at least the same level of scrutiny as human-created work.

Engineer A also failed to maintain Responsible Charge as required under licensure law, thereby violating Code section III.8.a. NSPE Position Statement No. 10-1778 defines "Responsible Charge" as "being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority. Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge." Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experience-based quality assurance review, engaging in critical discussions, mentorship, and professional development, elements that AI cannot replicate.

The BER notes that in BER Case 98-3, it stated that technology must not replace or be used as a substitute for engineering judgment. Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints. Recommendations from the program or intern should not be blindly accepted; they should be considered and challenged, and the resulting outputs should be understood. Only after the engineer in Responsible Charge is satisfied that the proposed solution accords with their own standards and those of the profession should the design or report be accepted. These are steps that, in this case, Engineer A chose not to follow.
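The review-before-acceptance process described above can be thought of as a gate the engineer in Responsible Charge must personally close before any AI-assisted document is sealed. As a purely illustrative sketch, and not part of the BER case or any NSPE guidance, the short example below shows one hypothetical way such a pre-seal checklist might be recorded; the specific check items and names are assumptions chosen for illustration only.

```python
# Hypothetical pre-seal checklist for AI-assisted design output.
# Illustrative only: the check items are assumptions, not NSPE requirements.
from dataclasses import dataclass, field

@dataclass
class PreSealReview:
    """Tracks checks the engineer in Responsible Charge performs personally."""
    constraints_defined: bool = False        # solution guidelines and constraints set before using the tool
    dimensions_verified: bool = False        # output dimensions checked against site-specific conditions
    safety_features_verified: bool = False   # required safety features confirmed present
    regulations_checked: bool = False        # compliance with local regulations reviewed
    notes: list = field(default_factory=list)

    def ready_to_seal(self) -> bool:
        # The document is sealed only after every check has been completed.
        return all([
            self.constraints_defined,
            self.dimensions_verified,
            self.safety_features_verified,
            self.regulations_checked,
        ])

# A cursory review, as in the facts of this case, leaves checks incomplete:
review = PreSealReview(constraints_defined=True)
review.notes.append("Safety features required by local regulations not confirmed")
assert not review.ready_to_seal()  # not ready; the seal should not be affixed
```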
While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency. BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others. AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9: "[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others." There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product. Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust.
References
I.1. Hold paramount the safety, health, and welfare of the public.
I.2. Perform services only in areas of their competence.
I.5. Avoid deceptive acts.
II.1.c. Engineers shall not reveal facts, data, or information without the prior consent of the client or employer except as authorized or required by law or this Code.
II.2.a. Engineers shall undertake assignments only when qualified by education or experience in the specific technical fields involved.
II.2.b. Engineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.
III.3. Engineers shall avoid all conduct or practice that deceives the public.
III.8.a. Engineers shall conform with state registration laws in the practice of engineering.
III.9. Engineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.
Dissenting Opinion