1. Introduction and Purpose
At Tekai, we recognize the transformative potential of Artificial Intelligence (AI) in driving innovation, improving efficiency, and delivering value to our clients. As a software development house, we are committed to integrating AI technologies into our operations in a manner that is ethical, transparent, and compliant with industry standards and regulatory frameworks, including the European Union’s AI Act and General Data Protection Regulation (GDPR).
This AI Policy outlines our principles, guidelines, and practices for the responsible use of AI technologies by our employees, contractors, and partners. It serves as a foundational document to ensure that our use of AI aligns with our core values, client expectations, and legal obligations. By establishing clear standards for AI adoption, we aim to:
- Promote Fairness and Accountability: Ensure that AI systems are designed, developed, and deployed in a way that avoids bias, discrimination, and harm.
- Enhance Transparency: Provide clarity on how AI tools are used within our organization, ensuring that employees and clients understand the capabilities and limitations of these technologies.
- Protect Data Privacy and Security: Safeguard sensitive and proprietary information by adhering to strict data protection protocols and minimizing the risks associated with AI tools.
- Support Compliance: Align with applicable laws, regulations, and client-specific AI policies to maintain trust and integrity in our operations.
- Drive Innovation Responsibly: Foster a culture of innovation while ensuring that AI is used in ways that benefit our clients, employees, and society at large.
This policy applies to all AI-related activities, including but not limited to the use of machine learning models, generative AI tools (e.g., OpenAI’s ChatGPT and GitHub Copilot), and other AI-driven technologies. It is designed to complement our existing policies on data protection, intellectual property, and employee conduct.
By adhering to this policy, we demonstrate our readiness to collaborate with clients on AI-powered projects while upholding the highest standards of ethical and responsible AI use. Together, we can harness the power of AI to achieve shared success.
This Policy is governed by Tekai's CTO and CIO (referred to as the Tekai Management Team).
2. Scope and Applicability
This AI Policy is designed to provide clear guidance on the use of Artificial Intelligence (AI) technologies within Tekai. It applies to all individuals and entities (hereinafter referred to as "Personnel") involved in our operations, including:
- Employees: Full-time, part-time, and contract employees engaged in software development, project management, and other roles.
- Contractors and Freelancers: External professionals working on projects or tasks that involve the use of AI tools.
- Partners and Vendors: Third-party organizations collaborating with us on AI-related projects or providing AI technologies.
- Clients: While this policy primarily governs our internal practices, it ensures alignment with client-specific AI policies and expectations when working on their projects.
This policy covers the following types of AI technologies and applications:
- Generative AI Tools: Platforms like OpenAI’s ChatGPT, GitHub Copilot, and other AI-driven content generation tools.
- Machine Learning Models: Custom or pre-built models used for data analysis, predictive analytics, and decision-making.
- Automation Tools: AI-powered tools for task automation, such as code review, testing, and deployment processes.
- Data Analysis Tools: AI tools used for processing, interpreting, and visualizing data.
- Other AI Technologies: Any other AI-driven software or systems adopted by the company.
The policy applies to all stages of AI use, including but not limited to:
- Development: Designing, building, and training AI models or systems.
- Deployment: Implementing AI tools in internal processes or client projects.
- Evaluation: Monitoring and assessing the performance, accuracy, and impact of AI systems.
- Maintenance: Updating, refining, and retiring AI tools as needed.
Exclusions
This policy does not apply to:
- AI tools used exclusively by clients unless explicitly governed by contractual agreements.
- Research and development activities conducted in collaboration with academic or regulatory bodies, which may be subject to separate guidelines.
By defining the scope and applicability of this policy, we ensure that all stakeholders understand their roles and responsibilities in adopting AI technologies responsibly and in compliance with regulatory and client requirements.
3. General Principles
3.1 Ethical Use
Tekai is committed to using AI in a manner that upholds the highest ethical standards. This includes:
- Fairness: Ensuring that AI systems do not perpetuate bias, discrimination, or inequality. We will actively work to identify and mitigate biases in data, algorithms, and outcomes.
- Human-Centricity: Prioritizing the well-being and rights of individuals affected by AI systems. AI should augment human capabilities, not replace or undermine human judgment.
- Non-Harm: Avoiding the use of AI in ways that could cause physical, emotional, or psychological harm to individuals or communities.
3.2 Transparency
Transparency is essential to building trust in AI systems. At Tekai, we will:
- Explainability: Ensure that AI systems and their outputs are explainable and understandable to stakeholders, including employees, clients, and end-users.
- Open Communication: Clearly disclose when and how AI is being used in our processes or deliverables. We will avoid using AI in ways that could be perceived as deceptive or misleading.
- Documentation: Maintain thorough documentation of AI tools, their purposes, and their limitations to ensure accountability and traceability.
3.3 Accountability
We recognize the importance of accountability in AI decision-making and implementation. This includes:
- Responsibility: Assigning clear roles and responsibilities for the development, deployment, and oversight of AI systems.
- Risk Management: Proactively identifying and addressing risks associated with AI, including data privacy breaches, algorithmic errors, and unintended consequences.
- Compliance: Adhering to all applicable laws, regulations, and client-specific AI policies. We will regularly review and update our practices to remain compliant with evolving standards.
3.4 Innovation and Continuous Improvement
While adhering to ethical and regulatory standards, Tekai encourages innovation in the use of AI. We will:
- Foster Creativity: Support the exploration of cutting-edge AI technologies to drive innovation and deliver value to our clients.
- Continuous Learning: Invest in training and development programs to keep our team updated on the latest AI trends, tools, and best practices.
- Feedback Loops: Establish mechanisms for collecting and incorporating feedback from employees, clients, and other stakeholders to improve our AI systems and policies.
3.5 Sustainability
Tekai is committed to using AI in ways that promote environmental and social sustainability. This includes:
- Resource Efficiency: Optimizing the use of computational resources to minimize the environmental impact of AI systems.
- Social Responsibility: Ensuring that AI technologies benefit society as a whole and contribute to positive social outcomes.
4. Approved AI Tools
To ensure consistency, security, and compliance in the use of Artificial Intelligence (AI), Tekai maintains a curated list of approved AI tools and platforms. Personnel are required to use only these approved tools for AI-related tasks unless explicitly authorized otherwise.
4.1 List of Approved AI Tools
The following AI tools are approved for use at Tekai:
- GitHub Copilot
- OpenAI’s GPT models, including GPT-3.5, GPT-4, GPT-4o, and o1
- Claude 3 and Claude 3.5 models (Opus, Sonnet, Haiku)
- DeepSeek V3 and R1 variants (only versions hosted on servers outside China)
- Gemini 1.0 and Gemini 2.0 variants
- Llama 3.1, Llama 3.2, Llama 3.3 variants
- The OpenRouter aggregator, restricted to the above-mentioned models
Additional tools may be approved on a case-by-case basis, subject to evaluation by the Tekai Management Team; an illustrative allow-list check follows below.
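The following sketch is illustrative only: it shows how a project might encode an allow-list of approved model identifiers and flag anything else, for example in a CI check. The identifier strings are assumed values chosen for the example, not an authoritative machine-readable form of the list above.

```python
# Illustrative only: a minimal allow-list check a project could run (e.g., in CI)
# to flag model identifiers that are not on the approved list. The identifier
# strings below are example values, not an authoritative encoding of Section 4.1.
APPROVED_MODELS = {
    "github-copilot",
    "gpt-3.5-turbo", "gpt-4", "gpt-4o", "o1",
    "claude-3-5-sonnet", "claude-3-5-haiku", "claude-3-opus",
    "deepseek-v3", "deepseek-r1",
    "gemini-1.0-pro", "gemini-2.0-flash",
    "llama-3.1", "llama-3.2", "llama-3.3",
}

def is_approved(model_id: str) -> bool:
    """Return True if the normalized model identifier is on the approved list."""
    return model_id.strip().lower() in APPROVED_MODELS

if __name__ == "__main__":
    assert is_approved("GPT-4o")                       # approved model passes
    assert not is_approved("some-unvetted-public-model")  # unlisted model is flagged
```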
4.2 Criteria for Approval
To be included in the list of approved tools, each AI platform must meet the following criteria:
- Security: Ensure robust data protection and encryption mechanisms to safeguard sensitive information.
- Compliance: Align with EU regulations, including GDPR and the AI Act, as well as client-specific requirements.
- Ethical Standards: Demonstrate a commitment to fairness, transparency, and non-discrimination in AI outputs.
- Integration: Seamlessly integrate with Tekai’s existing workflows, tools, and systems.
- Support and Maintenance: Provide reliable customer support and regular updates to address vulnerabilities and improve functionality.
4.3 Prohibited AI Tools
The use of AI tools not listed in Section 4.1 is strictly prohibited. This includes:
- Public platforms that do not guarantee data privacy or security.
- Tools that have not been evaluated for compliance with Tekai’s ethical and regulatory standards.
- AI applications that are known to produce biased, harmful, or discriminatory outputs.
Exceptions may be made for specific projects or clients, but only with prior approval from the Tekai Management Team.
4.4 Monitoring and Updates
The Tekai Management Team will:
- Regularly Review Tools: Assess the performance, security, and compliance of approved tools on an ongoing basis.
- Update the List: Add or remove tools from the approved list as new technologies emerge or regulatory requirements change.
- Address Issues: Investigate and resolve any concerns or incidents related to the use of approved AI tools.
5. Data Privacy and Security
At Tekai, we prioritize the protection of sensitive and proprietary data, ensuring that all AI-related activities comply with applicable data protection regulations, including the General Data Protection Regulation (GDPR) and other relevant frameworks. This section outlines our approach to safeguarding data when using AI tools and technologies.
In particular, Tekai will:
- Restrict the input of sensitive or proprietary data into AI tools, especially public platforms.
- Align all AI-related data processing with GDPR and other EU data protection regulations.
- Apply Tekai's existing confidentiality and data-use policies to all AI activities.
5.1 Data Input Restrictions
To minimize risks, Personnel must adhere to the following guidelines when using AI tools:
- No Sensitive Data: Do not input sensitive or confidential information (e.g., client data, personal information, trade secrets) into public or unsecured AI platforms (e.g., the public ChatGPT web interface or unvetted open-source tools).
- Anonymized Data: When using AI for data analysis, ensure that all data is anonymized or pseudonymized to protect individual identities.
- Data Minimization: Use only the minimum amount of data necessary to achieve the intended purpose of the AI application.
If any of these restrictions are violated, Personnel must report the violation to the Tekai Management Team as described in Section 5.4. An illustrative pseudonymization sketch follows below.
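The sketch below is a minimal, hypothetical example of the pseudonymization guideline above: it replaces e-mail addresses and phone numbers with stable pseudonyms before a prompt leaves Tekai's environment. The regular expressions and function names are assumptions made for illustration only; they are not a complete anonymization solution and do not replace the data-classification rules of this section.

```python
# Minimal illustration: pseudonymize obvious personal identifiers before a prompt
# is sent to any AI tool. Patterns and names are illustrative only; real projects
# must follow the data-input restrictions defined in Section 5.1.
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def pseudonymize(text: str) -> str:
    """Replace e-mail addresses and phone numbers with stable pseudonyms."""
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256(match.group(0).encode()).hexdigest()[:8]
        return f"<REDACTED:{digest}>"
    text = EMAIL_RE.sub(_token, text)
    return PHONE_RE.sub(_token, text)

if __name__ == "__main__":
    prompt = "Contact Jane at jane.doe@example.com or +32 470 12 34 56 about the bug."
    print(pseudonymize(prompt))  # identifiers are replaced before the prompt leaves Tekai
```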
5.2 Guidelines for Tool Usage
When using approved AI tools, Personnel must adhere to the following guidelines:
- Data Input Restrictions: As set out in Section 5.1.
- Output Validation: Review and validate all AI-generated outputs for accuracy, fairness, and relevance before use.
- Attribution and Transparency: Clearly indicate when AI has been used to create or modify content, and ensure compliance with intellectual property laws.
- Training and Familiarity: Complete the required training on the proper use of approved AI tools, as specified in Section 7, to maximize their potential and minimize risks.
5.3 Compliance with GDPR and Other Regulations
Tekai is committed to complying with all relevant data protection regulations, including:
- GDPR Principles: Ensure that AI-related data processing adheres to GDPR principles, such as lawfulness, fairness, transparency, and data minimization.
- Cross-Border Data Transfers: Avoid transferring data to AI tools or platforms located in jurisdictions without adequate data protection standards.
- Data Subject Rights: Respect the rights of data subjects, including the right to access, rectify, and erase their data.
5.4 Reporting Data Breaches
Personnel must immediately report any suspected or actual data breaches involving AI tools to the Tekai Management Team. Reporting instructions are set out in Section 9. Tekai will:
- Investigate: Assess the scope and impact of the breach.
- Mitigate: Take steps to contain the breach and prevent further damage.
- Notify: Inform affected clients, stakeholders, and regulatory authorities as required by law.
6. Intellectual Property (IP) Protection
At Tekai, we recognize the importance of protecting intellectual property (IP) in all AI-related activities. This section outlines our approach to ensuring that AI tools are used in ways that respect and safeguard the IP rights of both Tekai and our clients. Our guidelines are designed to align with industry best practices and legal standards.
6.1 Ownership of AI-Generated Content
- Client Ownership: Ownership of any results containing AI-generated content created specifically for a client project is governed by the client's Frame Agreement.
- Internal Ownership: AI-generated content developed for internal use, such as process automation or training materials, is the property of Tekai, subject to applicable laws and contracts.
6.2 Attribution and Transparency
To maintain transparency and accountability, Personnel must:
- Disclose Use of AI: Clearly indicate when AI tools have been used to create or modify content, ensuring clients are aware of the role of AI in deliverables.
- Avoid Misrepresentation: Never present AI-generated content as entirely human-created unless explicitly authorized.
6.3 Compliance with IP Laws
Tekai adheres to all relevant IP laws and regulations, including those governing copyright, patents, and trade secrets. This includes:
- Licensing Requirements: Ensuring that all AI tools used by Tekai are properly licensed and comply with their terms of use.
- Prohibited Content: Avoiding the use of AI tools to generate content that infringes on third-party IP rights, such as copyrighted material or proprietary data.
6.4 Handling IP Disputes
In the event of an IP dispute related to AI-generated content, Tekai will:
- Investigate: Conduct a thorough review of the content creation process to determine the source of the dispute.
- Mitigate: Take corrective action, such as revising or removing the disputed content, to resolve the issue promptly.
- Collaborate: Work closely with clients and legal advisors to ensure a fair and compliant resolution.
6.5 Personnel Responsibilities
All Personnel must adhere to the following guidelines to protect IP:
- Training: Specified in Section 7.
- Reporting: Specified in Section 9.
- Documentation: Maintain accurate records of AI tool usage and content creation processes to support IP claims and disputes.
7. Employee Training and Guidelines
At Tekai, we believe that empowering our employees with the knowledge and skills to use AI tools responsibly is essential for maintaining ethical standards, compliance, and operational efficiency. This section outlines our approach to training and providing clear guidelines for the use of AI technologies.
7.1 Training Programs
From time to time, Tekai provides comprehensive training programs to ensure that Personnel understand how to use AI tools effectively and responsibly. These programs include:
- AI Fundamentals: An introduction to AI concepts, tools, and their applications in software development and offshoring.
- Ethical AI Use: Training on ethical considerations, such as avoiding bias, ensuring transparency, and protecting data privacy.
- Tool-Specific Training: Hands-on sessions for the approved AI tools listed in Section 4, to maximize their potential while minimizing risks.
- Compliance Training: Education on relevant regulations, such as GDPR and the EU AI Act, and how they apply to AI use at Tekai.
7.2 Responsible AI Practices
To promote responsible AI use, Personnel are encouraged to:
- Stay Informed: Keep up-to-date with the latest developments in AI technologies, regulations, and best practices.
- Seek Guidance: Consult with the AI Governance Committee or supervisors when unsure about the appropriate use of AI tools.
- Report Issues: Immediately report any concerns, errors, or ethical dilemmas related to AI use to the appropriate team.
7.3 Continuous Learning and Improvement
Tekai fosters a culture of continuous learning and improvement by:
- Feedback Mechanisms: Encouraging Personnel to provide feedback on AI tools and training programs to identify areas for improvement.
- Regular Updates: Updating training materials and guidelines to reflect new technologies, regulations, and client requirements.
- Knowledge Sharing: Promoting collaboration and knowledge sharing among Personnel to enhance collective understanding of AI.
7.4 Consequences of Non-Compliance
Personnel who fail to adhere to Tekai’s AI policies and guidelines may face disciplinary action, including:
- Warnings: For minor violations, such as improper use of AI tools without malicious intent.
- Training Requirements: Mandatory retraining for repeated or significant violations.
- Suspension or Termination: For severe breaches, such as intentional misuse of AI tools or violation of data privacy laws.
8. Monitoring and Governance
This section outlines our approach to monitoring AI systems, ensuring compliance, and maintaining transparency across all operations.
8.1 AI Governance Framework
Tekai has established an AI Governance Framework to oversee the development, deployment, and use of AI technologies. This framework includes:
- Clear Roles and Responsibilities: Defined roles for employees, project managers, and compliance officers to ensure accountability at every stage of AI use.
- Policy Enforcement: Mechanisms to enforce compliance with this AI Policy and address violations promptly.
8.2 Monitoring AI Systems
To ensure AI systems operate as intended and adhere to ethical standards, Tekai implements the following monitoring practices:
- Bias and Fairness Checks: Continuous evaluation of AI models to detect and mitigate biases, ensuring fairness in decision-making processes (an illustrative check is sketched below).
- Incident Reporting: A system for Personnel to report concerns, errors, or ethical issues related to AI use without fear of retaliation.
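The sketch below is a hedged illustration of one such check: it computes a simple demographic-parity gap over a set of model decisions. It is only one of many possible fairness metrics; this policy does not prescribe a specific metric, threshold, or dataset, and all names in the example are hypothetical.

```python
# Illustration only: a simple demographic-parity check on model decisions, one of
# many possible fairness metrics. This policy does not prescribe a specific metric
# or threshold; the sample data below is invented for the example.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions).values()
    return max(rates) - min(rates)

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(demographic_parity_gap(sample))  # ~0.33: flag for review if above an agreed threshold
```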
8.3 Risk Management
To address potential risks associated with AI, Tekai implements:
- Risk Assessments: Systematic evaluations of AI projects to identify and mitigate risks related to bias, privacy, security, and compliance.
- Contingency Plans: Strategies to address AI system failures, such as reverting to manual processes or deploying alternative solutions.
- Third-Party Evaluations: Engaging external experts to assess the safety and ethical implications of high-risk AI systems.
8.4 Continuous Improvement
Tekai is committed to evolving its AI governance practices to keep pace with technological advancements and regulatory changes. This includes:
- Feedback Loops: Incorporating employee, client, and stakeholder feedback to refine AI policies and practices.
- Regulatory Updates: Staying informed about changes in AI-related laws and regulations to ensure ongoing compliance.
- Innovation with Responsibility: Balancing innovation with ethical considerations to drive progress while safeguarding trust.
9. Incident Reporting and Escalation
This section outlines our approach to identifying, responding to, and mitigating incidents involving AI systems.
9.1 Definition of AI Incidents
An AI incident is any event that results in unintended or harmful consequences due to the use of AI tools or systems. Examples include:
- Data Breaches: Unauthorized access or leakage of sensitive data caused by AI systems.
- Bias or Discrimination: AI outputs that perpetuate bias, discrimination, or unfair treatment.
- System Failures: Errors or malfunctions in AI systems that lead to incorrect decisions or outputs.
- Non-Compliance: Violations of regulatory or client-specific AI policies.
9.2 Incident Response Process
Tekai follows a structured incident response process to address AI-related issues effectively:
- Detection and Reporting:
  - Personnel must immediately report suspected or actual incidents to the AI Governance Committee or IT Security team.
  - Anonymous reporting channels are available to encourage reporting without fear of retaliation.
- Assessment:
  - The AI Governance Committee will assess the scope, severity, and potential impact of the incident.
  - Immediate containment measures will be implemented to prevent further harm.
- Mitigation:
  - Corrective actions will be taken to resolve the issue, such as revising AI models, updating processes, or disabling faulty systems.
  - Affected systems will be thoroughly tested before being reintroduced into operations.
- Communication:
  - Relevant stakeholders, including clients and regulatory authorities, will be informed of the incident and remediation efforts in a timely and transparent manner.
- Documentation:
  - Detailed records of the incident, including root cause analysis and corrective actions, will be maintained for audit and accountability purposes.
9.3 Mitigation Strategies
To prevent future incidents, Tekai implements the following mitigation strategies:
- Ongoing Monitoring: Continuous oversight of AI systems to detect and address potential issues early.
- Training and Awareness: Regular training for Personnel on identifying and mitigating AI-related risks.
- Bias Audits: Periodic reviews of AI models to ensure fairness and eliminate discriminatory outputs.
- Backup Systems: Contingency plans, such as manual processes or alternative tools, to maintain operations during system failures.
9.4 Post-Incident Review
After resolving an incident, Tekai conducts a thorough post-incident review to:
- Analyze Root Causes: Identify the underlying factors that contributed to the incident.
- Improve Processes: Update policies, procedures, and AI systems to prevent recurrence.
- Share Learnings: Communicate lessons learned across the organization to enhance collective awareness and preparedness.
9.5 Client Notification
Tekai is committed to transparency with clients in the event of an incident. This includes:
- Timely Updates: Informing clients of the incident, its impact, and the steps taken to resolve it as soon as possible.
- Collaboration: Working closely with clients to address any concerns or mitigate impacts on their projects.
- Preventive Measures: Sharing preventive actions implemented to avoid similar incidents in the future.
10. Alignment with Client Policies
10.1 Commitment to Client AI Policies
Tekai is dedicated to adhering to the AI policies and guidelines set forth by our clients. This includes:
- Thorough Reviews: Carefully reviewing and understanding each client’s AI policies, including prohibited uses, data handling requirements, and accountability measures.
- Compliance Assurance: Ensuring that our AI practices, tools, and workflows fully comply with the stated policies of our clients, including frameworks like McKinsey’s Client Service Policy, which emphasizes avoiding unintended consequences and protecting vulnerable populations.
- Non-Negotiable Standards: Abstaining from any AI-related work that conflicts with a client’s policies, even if the work is unpaid or falls outside regulatory requirements.
10.2 Collaboration and Integration
Tekai actively collaborates with clients to integrate AI into their projects in a way that aligns with their specific needs and policies. This includes:
- Early Alignment: Engaging with clients at the outset of projects to discuss and align on AI-related objectives, constraints, and compliance requirements.
- Custom Solutions: Tailoring our AI strategies and tools to meet the unique needs and policy frameworks of each client.
- Transparent Communication: Maintaining open and ongoing dialogue with clients about AI usage, risks, and outcomes to ensure mutual understanding and trust.
10.3 Monitoring and Reporting
To ensure sustained alignment with client policies, Tekai implements robust monitoring and reporting mechanisms:
- Regular Reviews: Conduct periodic reviews of AI workflows to confirm ongoing compliance with client policies.
- Incident Reporting: Promptly informing clients of any deviations or concerns related to AI usage and collaboratively addressing them.
- Feedback Loops: Actively seeking client feedback on AI integration and using it to refine our processes.
10.4 Supporting Client Compliance
Tekai supports clients in meeting their own compliance obligations by:
- Policy Alignment Tools: Providing documentation and tools to help clients understand how our AI practices align with their policies.
- Risk Mitigation: Proactively identifying and addressing potential risks that could impact a client’s compliance with external regulations or internal standards.
- Training and Resources: Offering guidance and resources to clients on best practices for AI integration and policy enforcement.
11. References
- European Union's AI Act: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- GDPR Regulation: https://gdpr-info.eu/
- Tekai Information Security Policy (internal document)