Amazon Web Services (AWS) Certified Security Specialty (CSS) Beta Exam

I had the opportunity to take the AWS Certified Security Specialty exam at re:Invent 2016. The exam is in a beta phase where questions are being tested and refined and the passing score is being set. I won't find out if I passed until March 2017 and I can't share actual exam questions, but I can share advice for others interested in the exam in the future. Note that as of January 2017 the beta is closed, as it has proved very popular.


I entered the exam cold, drawing only on my working knowledge of AWS and its services, so my perspective should be an unbiased view of the exam. There is an exam blueprint, but it has been pulled from the AWS website.


  • ~3hr Exam Time
  • > 100 Questions
  • Reading Comprehension Questions
  • Question Nuances Were Important
  • Heavy Focus on Services and Service Components with a Security Relationship
    • IAM
    • WAF
    • CloudFront
    • ACM
    • Security Groups
    • NACLs
    • VPC
    • etc.

My Exam Perspective:

I found the questions to be very long, requiring significant reading comprehension to answer, and the possible answers were often equally long. I had to read a number of questions at least twice to pick up on all of their nuances and be able to differentiate answer validity. The questions for the exam had some substantial parallels to security-related questions on other exams.

NOTICE: All thoughts/statements in this article are mine alone and do not represent those of Amazon or Amazon Web Services. All referenced AWS services and service names are the property of AWS. Although I have made every effort to ensure that the information in this article was correct at the time of writing, I do not assume and hereby disclaim any liability to any party for any loss, damage, or disruption caused by errors or omissions, whether such errors or omissions result from negligence, accident, or any other cause.

Amazon Cognito User Pool Admin Authentication Flow with AWS SDK For .NET

Implementing the Amazon Cognito User Pool admin authentication flow with the AWS SDK for .NET offers a path to user authentication without managing the host of components otherwise needed to sign up, verify, store and authenticate a user. Though Cognito is largely framed as a mobile service, it is well suited to supporting web applications. To implement this process you would use the Admin Auth Flow outlined in the AWS-produced slide below. This example assumes that you have already configured a Cognito User Pool with an App, ensuring that "Enable sign-in API for server-based authentication (ADMIN_NO_SRP_AUTH)" is checked for that App on the Apps tab and that no App client secret is defined for that App, as App client secrets are not supported in the .NET SDK. It is also assumed that a Federated Identity Pool is configured to point to the aforementioned User Pool.

This auth flow bypasses the Secure Remote Password (SRP) protocol protections heavily used by AWS to prevent passwords from even being sent over the wire. As a result, when used in a client-server web application, your users' passwords are transmitted to the server, and that communication must be protected with strong encryption to prevent compromise of user credentials. The below code implements a CognitoAdminAuthenticationProvider with Authenticate and GetCredentials members. The Authenticate method returns a wrapped ChallengeNameType and AuthenticationResultType set of responses. A challenge will only be returned if additional details are needed for authentication, in which case you would simply ensure those details are included in the UserCredentials provided to the Authenticate method and call it again. Once authenticated, an AuthenticationResultType will be included in the result and can be passed to the GetCredentials method to obtain temporary AWS credentials.
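Since the original code listing is not reproduced here, the following is a minimal sketch of such a provider, assuming the AWSSDK.CognitoIdentityProvider and AWSSDK.CognitoIdentity packages; the region, pool and client identifiers are placeholders you would replace with your own.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon;
using Amazon.CognitoIdentity;
using Amazon.CognitoIdentityProvider;
using Amazon.CognitoIdentityProvider.Model;

// Illustrative sketch of an admin (ADMIN_NO_SRP_AUTH) authentication provider.
public class CognitoAdminAuthenticationProvider
{
    private readonly IAmazonCognitoIdentityProvider _client =
        new AmazonCognitoIdentityProviderClient(RegionEndpoint.USEast1);

    // Placeholder identifiers: replace with your User Pool, App client
    // and Federated Identity Pool values.
    private const string UserPoolId = "us-east-1_EXAMPLE";
    private const string AppClientId = "EXAMPLECLIENTID";
    private const string IdentityPoolId = "us-east-1:00000000-0000-0000-0000-000000000000";

    // Returns the raw response; check ChallengeName before using AuthenticationResult.
    public async Task<AdminInitiateAuthResponse> Authenticate(string userName, string password)
    {
        return await _client.AdminInitiateAuthAsync(new AdminInitiateAuthRequest
        {
            UserPoolId = UserPoolId,
            ClientId = AppClientId,
            AuthFlow = AuthFlowType.ADMIN_NO_SRP_AUTH,
            AuthParameters = new Dictionary<string, string>
            {
                { "USERNAME", userName },
                { "PASSWORD", password }
            }
        });
    }

    // Exchanges the User Pool id token for temporary AWS credentials
    // via the Federated Identity Pool.
    public CognitoAWSCredentials GetCredentials(AuthenticationResultType result)
    {
        var credentials = new CognitoAWSCredentials(IdentityPoolId, RegionEndpoint.USEast1);
        credentials.AddLogin(
            "cognito-idp.us-east-1.amazonaws.com/" + UserPoolId, result.IdToken);
        return credentials;
    }
}
```

Note this sketch omits the challenge-response loop and error handling a production provider would need.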

Usage of the above code would look something like the below. This example uses the temporary credentials to call S3 ListBuckets.
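As the original usage listing is not reproduced here either, a hedged reconstruction, assuming a provider shaped as described above, might look like this (user name and password are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.S3;

// Hypothetical usage of the CognitoAdminAuthenticationProvider described above.
public static class Example
{
    public static async Task Run()
    {
        var provider = new CognitoAdminAuthenticationProvider();
        var response = await provider.Authenticate("user@example.com", "Passw0rd!");

        if (response.ChallengeName != null)
        {
            // Supply the requested details and call Authenticate again.
            Console.WriteLine("Challenge required: " + response.ChallengeName);
            return;
        }

        // Use the temporary credentials to call S3 ListBuckets.
        var credentials = provider.GetCredentials(response.AuthenticationResult);
        using (var s3 = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
        {
            var buckets = await s3.ListBucketsAsync();
            Console.WriteLine("Found " + buckets.Buckets.Count + " buckets");
        }
    }
}
```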

As an additional note, the options for the CognitoAWSCredentials Logins dictionary are listed below. This example uses the last listed value.

Logins: {
  'graph.facebook.com': '[FBTOKEN]',
  'www.amazon.com': '[AMAZONTOKEN]',
  'accounts.google.com': '[GOOGLETOKEN]',
  'www.digits.com': '[DIGITSTOKEN]',
  'cognito-idp.[region].amazonaws.com/[your_user_pool_id]': '[id token]'
}


Getting Started with AWS Lambda C# Functions

For those of us that are .NET developers at heart, we finally have the ability to run serverless C# applications on AWS! Support for the C# language in AWS Lambda was announced at AWS re:Invent 2016 (1-Dec-2016). This post is a quick guide to help you get started.

C# support in Lambda requires the use of .NET Core targeted assemblies, as the Core CLR offers the cross-platform support that enables the Linux-based Lambda infrastructure to execute .NET compiled binaries. Lambda accepts the zipped build output of a .NET Core targeted class library, rather than raw code, for C# Lambda functions. Function handlers are referenced using the syntax <Assembly Name>::<Fully Qualified Class Name>::<Method Name>, which in the case of a project with an output assembly named "myassembly", a namespace of "myassemblynamespace", a class named "myclass" and a method named "mymethod" would be "myassembly::myassemblynamespace.myclass::mymethod". AWS provides a project type and tooling through its Toolkit for Visual Studio that enable creation of C# Lambda functions; however, you can build your own from a standard class library project.


  1. Development Environment (See .NET Core Installation Guide)
    • Visual Studio 2015 with Update 3
    • .NET Core Tools
  2. AWS Visual Studio IDE Toolkit
    • AWS SDK for .NET (v3.3.27.0 or greater required)
    • AWS Toolkit for Visual Studio (v1.11.0.0 or greater required)

Required Project References:
  • Amazon.Lambda.Core (Install-Package Amazon.Lambda.Core)
  • Amazon.Lambda.Serialization.Json (Install-Package Amazon.Lambda.Serialization.Json)
  • Amazon.Lambda.Tools (Install-Package Amazon.Lambda.Tools -Pre)
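Putting the pieces together, a minimal function might look like the following sketch, reusing the placeholder names from the handler-string example above (its handler string would be "myassembly::myassemblynamespace.myclass::mymethod", assuming the assembly is named "myassembly"):

```csharp
using Amazon.Lambda.Core;

// Registers the JSON serializer Lambda uses to convert the function's
// input and output to and from .NET types.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace myassemblynamespace
{
    public class myclass
    {
        // Handler: myassembly::myassemblynamespace.myclass::mymethod
        public string mymethod(string input, ILambdaContext context)
        {
            context.Logger.LogLine("Received: " + input);
            return input?.ToUpper();
        }
    }
}
```

Zip the build output of this class library and upload it as the function package, or let the AWS Toolkit's project template handle packaging and deployment for you.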


FISMA, FedRAMP and the DoD CC SRG: A Review of the US Government Cloud Security Policy Landscape

The Federal Information Security Management Act (FISMA), a US law signed in 2002, defines the information protection requirements for US Government ("government") data and is applicable to all information systems that process any government data, regardless of ownership or control of such systems. Systems integrators (SIs) under contract to perform work for the government are almost always provided some government furnished information (GFI) or government furnished equipment (GFE), and FISMA requirements extend to the systems owned and/or operated by these SIs if they store or process government data. Government data always remains under the ownership of the source agency, with that agency holding sole responsibility for determining the data's sensitivity level. It is usually a contractual requirement for an SI charged with management of government data to ensure FISMA compliance, and an SI is obligated to destroy or return all GFI and GFE at the end of the contractual period of performance. Government data falls into a number of information sensitivity categories, ranging from public information to the highest of classifications, and the compliance requirements imposed by FISMA increase in lockstep with that sensitivity.

A large portion of government data under the management or control of most SIs will fall in the public or controlled unclassified information (CUI) buckets. Public data is rather straightforward in that it is publicly releasable and, if compromised, would have little to no impact on the public image, trust, security or mission of the owning government agency and/or its personnel; as such, it requires the least compliance overhead. CUI, on the other hand, is significantly more complex and nuanced. Compromised CUI data could damage the public image, trust, security or mission of the owning government agency and/or its personnel. As such, CUI data has some restriction applied to its distribution []. With Department of Defense (DoD) data, there are additional types of distribution restrictions defined in DoD Directive (DoDD) 5200.01 v4 [] and a host of marking requirements []. A common misunderstanding of CUI requirements is that, due to its unclassified nature, CUI does not require significant security consideration. This misunderstanding is something to be cognizant of in any engagement with a government agency or SI, and it is advisable to inquire about CUI data restrictions, as this area comes with legal as well as contractual ramifications.

Data sensitivity is a multifaceted factor that the National Institute of Standards and Technology (NIST) breaks down into three areas: Confidentiality, Integrity and Availability, the "C-I-A category", presented in the format {x,x,x} where each "x" is low, moderate or high. The highest of these three categories determines the sensitivity, and therefore the compliance requirements, of an information system. Determining the data sensitivity of an information system is a process defined in NIST Special Publication (SP) 800.60 volume 1 []. This process starts with determining the types of data processed and/or stored by an information system, a critical step to ensure accurate compliance implementation. This enables the selection of data type categories defined in NIST SP 800.60 volume 2 []. For each data category applicable to an information system, NIST SP 800.60 volume 2 provides a baseline C-I-A category assessment as well as a number of caveats that could dictate a higher or lower assessment for each of the C-I-A categories. An information owner can choose to adjust these assessments based on operational factors; however, deviation from a C-I-A category baseline will require justification. The result is a list of applicable data type categories and an assessed C-I-A categorization for each. The highest categorization across the three C-I-A categories for all data types becomes the baseline level for the information system. The output of this process is generally a document describing the applicable data type categories and the assessed C-I-A categorization for each, with required justifications. This document generally requires review and signature by the system owner and an organization's authorization authority. Any change in data processed or stored by an information system should trigger a new iteration of this and all subsequent processes.
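The high-watermark rule described above reduces to a simple maximum across all assessed C-I-A levels. A toy sketch (the data types and levels are purely illustrative, not drawn from NIST SP 800.60):

```csharp
using System;
using System.Linq;

public static class Categorization
{
    // 0 = low, 1 = moderate, 2 = high
    static readonly string[] Levels = { "low", "moderate", "high" };

    // The system baseline is the highest level across all C-I-A
    // assessments of all applicable data types (the high-watermark rule).
    public static string SystemBaseline(int[][] ciaPerDataType) =>
        Levels[ciaPerDataType.SelectMany(cia => cia).Max()];

    public static void Main()
    {
        // Two hypothetical data types with {C,I,A} assessments of
        // {low, moderate, low} and {moderate, moderate, low}.
        var assessments = new[]
        {
            new[] { 0, 1, 0 },
            new[] { 1, 1, 0 }
        };
        Console.WriteLine(Categorization.SystemBaseline(assessments)); // moderate
    }
}
```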

Compliance requirements come in the form of auditable states for various aspects of an information system's infrastructure, architectural design and implementation, and the policies and practices ("governance") established surrounding that system's management. There are generally two different types of controls: security controls and specific vendor product or process controls ("implementation controls"). Security controls are high level and cover a broad requirement for an information system, often involving a number of physical implementation aspects and/or process documentation components to meet. Implementation controls are often very specific, requiring verification of state across multiple components to roll up to security control compliance. The NIST SP 800 series documents [800], stemming from the need to define guidelines for compliance with FISMA and other laws, are the basis for government agency compliance programs. These programs draw from the security controls defined in NIST SP 800.53 [] Appendix D. Organizations across the government are responsible for implementation of their own security and compliance programs as required by FISMA. As a result, processes vary across agencies, though most are implementations of NIST SP 800.37 [], which describes a process called the Risk Management Framework (RMF). RMF is a risk-based approach to addressing information technology (IT) security, with emphasis placed on control compliance priority and assessing the overall risk posed by a system. NIST SP 800.53 Revision 4 Appendix D defines three control baselines (low, moderate and high) corresponding to the assessed data sensitivity level of an information system. Selection of NIST controls for a given information system within non-DoD government organizations generally falls to agency-specific security requirement determinations.
NIST also defines the concept of overlays, which are purpose-driven control sets, the privacy overlay(s) [] being among the most commonly mentioned. Overlays are generally consistent across organizations and may overlap with or be additive to an organization's own security requirements.

The White House established a cloud-first policy across government agencies in 2011 [] that began a new initiative to evaluate commercial cloud service offerings (CSOs) provided by cloud service providers (CSPs) before the use of government owned hosting solutions. This policy, established in large part as a measure to address the huge IT budget across the government, recognized that industry was far more agile and had far more resources than the government to produce innovative and cost-effective solutions. In implementing this policy, agencies began assessing CSP CSOs to ensure that they met FISMA requirements. The result of these assessments was authorizations to operate (ATOs) for systems leveraging the same underlying cloud infrastructure, which began to overlap with wasteful duplication of work across agencies. The Federal Risk and Authorization Management Program (FedRAMP) was the solution, creating a common assessment program and a set of three control baselines (low, moderate and high) based on NIST SP 800.53 controls for CSP CSOs, such that Provisional ATOs (P-ATOs), attesting a CSP's contribution to a system's control coverage, could be shared and trusted across agencies. The term provisional is used in that they are components of a full system ATO, not a full system authorization in themselves. It follows that FISMA applies to all government systems and FedRAMP is a specific program for CSPs to implement and assess compliance with FISMA requirements for their CSOs, covering those aspects within the boundaries of a CSP's purview. This enables their customers to reuse ("inherit") the compliance assessments already completed for CSP CSOs, reducing the overall workload and cost of implementing security for government IT systems.

Each of the FedRAMP control baselines represents a tiered compliance level aligned to NIST SP 800.60 volume 2 data categorizations for data sensitivity, with an escalating number of applicable NIST SP 800.53 controls []. FedRAMP authorizations come in two forms: agency-specific ATOs and those granted by the FedRAMP Joint Authorization Board (JAB), which itself is a joint venture between government agency chief information officers (CIOs). An agency ATO is one in which a specific agency has assessed the compliance of a CSO against its specific security requirements and granted an ATO for that CSO, which can then be leveraged as a starting point by the next agency that comes along and uses that CSO. The security requirements, and hence the implemented controls, may or may not meet those of the next organization and therefore may not be reusable. A JAB P-ATO, on the other hand, requires compliance with a common control baseline and is the most stringent path that a CSP can take to FedRAMP compliance. In this path, a CSP prepares a documentation package covering its compliance implementation against a specific FedRAMP control baseline. A Third Party Assessment Organization (3PAO), accredited by the FedRAMP program management office (PMO), must concur with the CSP's independent control assessment. Each of the JAB members then further reviews the package and must concur before granting a P-ATO. FISMA requirements continue to apply to the systems implemented on top of FedRAMP-authorized CSP CSOs, and the system owner is responsible for any deltas in control compliance beyond those covered under the CSP CSO authorization.

For many years, the DoD defined its own compliance program, called the DoD Information Assurance Certification and Accreditation Process (DIACAP), described in DoD Instruction (DoDI) 8510.01, with its own security controls. Reissuance of this program in 2015 saw a rename, "RMF for DoD IT" ("RMF"), and a shift to NIST SP 800.37 processes as well as NIST SP 800.53 security controls. This established a new path to security implementation across DoD systems as a whole, but had a particular impact on the DoD's ability to leverage P-ATOs granted under the FedRAMP program. Older established information systems have the leeway of a grace period to convert from DIACAP to RMF; however, all DoD cloud systems are required to implement RMF from the start. The DoD is by far the largest of government organizations and is a huge target for attackers, with a treasure trove of information spread across sprawling and sometimes ancient IT systems. Because of the DoD's huge attack surface and the sensitivity of its mission, the DoD requires more stringent security controls and processes than those imposed on other government agencies. Fast-forwarding through several years of complex work to change an organization as large as the DoD, and with significant support of the DoD CIO, the Defense Information Systems Agency (DISA) was given the task of defining and documenting the gaps between the FedRAMP baselines and DoD requirements. DISA delivered a document in January of 2015 called the Cloud Computing (CC) Security Requirements Guide (SRG).

The CC SRG, also branded as FedRAMP+, inherited some terminology from earlier documentation attempts, called Impact Levels (ILs), which have evolved to align to FedRAMP's baseline levels. The CC SRG control requirements are specifically based on the FedRAMP moderate baseline controls, and a CSP must meet the moderate baseline control set for a DoD authorization. IL 2 (there is no IL 1) aligns with the FedRAMP moderate baseline and is applicable to IT systems processing or storing at most publicly releasable data, where the NIST C-I-A categorization of the system is low to moderate. IL 4 (there is no IL 3) aligns with the FedRAMP moderate baseline and is applicable to IT systems processing CUI data, where the NIST C-I-A categorization of the system is moderate. IL 4 systems may include those that process and/or store Personally Identifiable Information (PII), Protected Health Information (PHI), etc.; however, they may require application of additional overlay controls if the information system meets certain Privacy Act [] criteria. It may be possible to consider some systems where the NIST C-I-A categorization is high as IL 4, if the authorization of the CSP CSO is up to the high baseline and the sensitivity of the data does not cross the national security systems (NSS) threshold. IL 5 aligns with the FedRAMP high baseline and is applicable to more sensitive CUI as well as NSS, where the NIST C-I-A categorization of the system is high. At present, there are physical separation concerns that prevent IL 5 workloads from deployment on commercial cloud platforms. Finally, IL 6 is beyond FedRAMP program alignment and aligns with data categorized at the SECRET level, making it generally out of scope for CC SRG documentation, as such data requires physical environment isolation that does not map to public cloud models.

The CC SRG stipulates a requirement that IL 4 and IL 5 workloads remain isolated from the internet and connect to the Non-secure Internet Protocol Router Network (NIPRNet) via direct circuit or internet protocol security (IPsec) virtual private network (VPN) to a NIPRNet edge gateway. To support the IL 4 and IL 5 NIPRNet connection requirement, DISA has defined the concept of a Boundary Cloud Access Point (BCAP), "CAP", that acts as the gateway between a CSP's CSO and the NIPRNet edge. DISA has the task of providing an enterprise CAP for DoD systems leveraging authorized cloud services. DISA delivered an initial CAP capability in late 2015 and later a functional requirements (FR) document called the Secure Cloud Computing Architecture (SCCA) [] describing its desired future state. A CAP serves two primary purposes: connectivity between a CSP and the NIPRNet, and protection of the DoD Information Network (DoDIN), a broad term for DoD networks, from threats originating from a CSP CSO. In practice, today's CAP comprises a common point, referred to as a meet-me point where CSP and DoD infrastructure can meet, at a co-location facility (co-lo), as well as the appropriate security stack to monitor and protect against threats. IL 4 and IL 5 systems may pass outbound traffic to the internet through the NIPRNet internet access points (IAPs) and may accept traffic inbound from the IAPs; however, inbound traffic may require whitelisting.

DoD organizations may present a case for deployment of their own CAP solution aligned with the FR specifications; however, this requires DoD CIO approval and a compelling use case. The Navy Space and Naval Warfare Systems Command (SPAWAR) Systems Center Atlantic ("SSC LANT") began its cloud exploration several years before DISA gained its cloud roles and responsibilities and before DoD policy was ready for commercial cloud services. The commercial service integration integrated project team (IPT) began working through many of the challenges that have enabled the DoD as a whole to move toward cloud, with the support of the Navy CIO at the time, Terry Halvorsen, who later became the DoD CIO. As part of those efforts, the Navy established a CAP capability, which operates today concurrently with the DISA enterprise capability.

The authority to authorize information systems within the DoD under the RMF program resides at the CIO level; however, it is generally delegated to authorizing officials (AOs) aligned to organizational verticals. An AO is the final authority that signs a system's ATO, granting it the authority to operate given the compliance mechanisms documented in the system's security package. The structure of AO authority delegation varies significantly across the DoD, with services like the Air Force having a highly distributed authority and services like the Army having a more centralized approach. This structure, and the civilian or military staff filling these roles, change frequently due to assignment rotations. The structure of the review process for any system security package will be specific to that organization; however, it follows the general model of having a review organization that reviews documentation and then presents the risk to the AO for acceptance. In some cases, these organizations may conduct only documentation reviews, while in others they may leverage a team of auditors to validate control compliance.

The RMF process is a circular process flow: (step 1) system categorization, (step 2) security control selection, (step 3) security control implementation, (step 4) security control assessment, (step 5) system authorization and (step 6) monitoring of security controls. Steps 1 and 2 in the RMF process directly align with the steps required to determine an information system's IL; therefore, it naturally follows that starting RMF leads to determining an information system's IL. The IL of a system is the key component in determining whether a CSP's CSO authorization is sufficient to support an information system's needs: a CSP CSO authorization must meet or exceed the IL of the proposed system workload. From that point forward, the IL becomes less relevant and the focus shifts primarily to implementation, and then assessment, of the security controls selected after system categorization. DISA, as a broader mission responsibility, defines both SRGs for other broad technologies and security technical implementation guides (STIGs) [] for specific vendor products. STIGs provide very detailed checks for product configuration, all targeted at compliance with a higher-level security control. As implementation and assessment progress, STIG evidence ("checklists") serves as compliance evidence for a system's eventual risk assessment. There is overlap in intent across NIST security controls, and STIG checks may apply to several different security controls. For this reason, a Control Correlation Identifier (CCI) [] maps overlapping NIST security controls to the STIG checks that address a security control. It is important to note that not all STIG checks map to a CCI, not all CCIs will have mapped STIG checklists, and even when mapped they may not provide complete CCI coverage. In a large environment, you might imagine that multiple STIGs could apply to every server ("instance"), and often a STIG applies multiple times across instances in an environment.
CCIs not supported, completely or in part, by STIG checklists require documentation. This documentation, called a system security plan (SSP), covers an information system's CCI compliance, supported by STIG checklists and system governance processes, to facilitate system risk acceptance. The format of an SSP may be specific to the authorizing organization; however, there is spotty coverage of DoD SSP templates in the wild. A DoD SSP will likely be different than a FedRAMP SSP given the emphasis on CCIs, and therefore the FedRAMP SSP templates generally do not apply to DoD systems.

DoD information systems must also comply with the requirements of the Cyber Incident Handling Program defined in DoDI 8530.01 [], "cyber defense" or "C2". The cyber defense program establishes a three-tiered reporting chain, covering threat detection and incident response, starting with US Cyber Command (USCYBERCOM) at tier one and extending to mission system owners at tier three. In between, at tier two, several participants enable both communications surrounding, and oversight of, cyber threat monitoring. Of those participants, the Boundary Cyber Defense (BCD) and Mission Cyber Defense (MCD) roles are the most important for cloud. The BCD role is to monitor and protect the DoDIN edge, in this case the NIPRNet edge via a NIPRNet Federated Gateway (NFG), where a meet-me point connects to the NIPRNet. The CAP provider should establish the required BCD relationship. The MCD role must be filled by a Cyber Defense Service Provider (CDSP), often referred to as a CNDSP due to prior lexicon, which itself must be accredited by USCYBERCOM and provides an oversight role to the tier three mission system owner. All mission systems must align with an accredited CDSP in order to connect to the DoDIN. Depending on a mission system's organizational alignment, it will fit most appropriately with one or another CDSP; however, if that CDSP is unable to provide support, DISA generally acts as the provider of last resort. Alignment with a CDSP generally takes the form of a signed memorandum of agreement (MOA) or service level agreement (SLA) and requires some exchange of funds between governmental organizations. Obtaining a cloud permission to connect (CPTC) to the DISA CAP from the DISA Cloud Office requires documentation of this relationship. Since this alignment process can take some time, it is best to contact the appropriate CDSP at the very beginning of any DoD cloud project.

Amazon Web Services (AWS) provides CSOs authorized in bundles along the boundaries of its regions. Each service is considered a CSO, and a list of CSOs covered under AWS authorizations is provided on the AWS DoD SRG compliance site []. The US East and US West region authorization under the FedRAMP program is at the moderate control baseline, and the GovCloud authorization is at the high control baseline. AWS provides an Enterprise Accelerator - Compliance for NIST-based Assurance Frameworks [] and a security control matrix [] that explain both how AWS services align to the NIST framework and the controls that AWS is responsible for maintaining partial or complete compliance with. For non-DoD government systems, region selection starts with identifying the available regions with the proper authorization level and should consider both cost and geo-alignment factors. For DoD systems, region selection is a bit more direct in that public IL 2 systems can choose from all CONUS regions, and all other DoD workloads at IL 4 must use GovCloud. AWS is not able to support IL 5 workloads for the DoD today due to physical separation concerns. At the further end of the spectrum, AWS may be able to offer support for IL 6 workloads, and for a category not identified in the CC SRG for higher classification levels, via completely isolated private service regions. GovCloud is sometimes confused as being the AWS IL 6 service capability; however, that is not the case. An area not yet covered is other legal obligations surrounding data, especially data subject to the International Traffic in Arms Regulations (ITAR) or the Export Administration Regulations (EAR), henceforth "ITAR", both of which restrict the transfer of certain military, industrial or manufacturing information internationally. The GovCloud region complies with ITAR responsibilities covering the CSOs provided through that region and follows the standard AWS shared responsibility model []. AWS does not restrict customer use of GovCloud once vetted to meet the requirements for access: the account owner, holder of the root account, must be a US person on US soil with a legitimate need to access the region. Customers must then implement proper controls over their infrastructure and governance to meet ITAR requirements.


Using Linqpad to Query Amazon Redshift Database Clusters

Looking for a quick and easy way to query an Amazon Redshift database cluster? I was, and the first place I turned was my favorite tool for this kind of thing, Linqpad. I was a bit dismayed to find that no one, as far as I could find, has developed a Linqpad database driver for Redshift. Small note: there are a few PostgreSQL options, and Redshift is supposed to be PostgreSQL compatible; however, none of them seemed to work for Redshift.

Giving credit to the author of this article describing the use of Linqpad for connections to MS Access, I made a few tweaks and boom, I have a working way to connect to and query Redshift. So, in the pay-it-forward spirit, I thought I'd share.

// (1) Copy and paste this entire block of code into a Linqpad query window, no connection needed, and change language to C# Statement(s).
// (2) To use the .NET ODBC assembly, you'll have to press F4 then click on the "Additional Namespace Imports" tab. Add "System.Data.Odbc",
//     no quotes, on a single line and click OK.
// (3) Install the x86 Amazon Redshift ODBC Driver. The
//     x64 driver does not work.
// (4) Update the query settings.

// ************************************************ Update Settings Below ************************************************
string endpoint = "";
string database = "";
string user = "";
string pass = "";
string port = ""; //Default is 5439

string table = "";
string query = "SELECT * FROM " + table; //Optionally Update Query
// ************************************************ End Update Settings Section ************************************************

// ************************************************ Do Not Modify Below ************************************************
string connectionString = "Driver={Amazon Redshift (x86)}; Server="+endpoint+"; Database="+database+"; UID="+user+"; PWD="+pass+"; Port="+port;

try
{
    using (OdbcConnection connection = new OdbcConnection(connectionString))
    {
        Console.WriteLine("Connecting to ["+connectionString+"]");
        connection.Open();
        Console.WriteLine("Executing query ["+query+"]");
        if (query.StartsWith("SELECT", StringComparison.OrdinalIgnoreCase))
        {
            using (OdbcDataAdapter adapter = new OdbcDataAdapter(query, connection))
            {
                DataSet data = new DataSet();
                adapter.Fill(data, table);
                Console.WriteLine("Found ["+data.Tables[0].Rows.Count+"] rows");
                data.Tables[0].Dump(); // Display results in the Linqpad grid
            }
        }
        else
        {
            using (OdbcCommand command = new OdbcCommand(query, connection))
            {
                var impactedRows = command.ExecuteNonQuery();
                Console.WriteLine("["+impactedRows+"] rows impacted");
            }
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine("Error: " + ex.Message);
}
// ************************************************ End Do Not Modify Section ************************************************
