Wednesday, June 11, 2014
SOA School: Architecting Watertight Security for the New Enterprise
Security is an ever-present concern for IT. It can be a daunting area when one considers all of the possible dangers and the large variety of solutions to address them. But the aim of Enterprise Security really just boils down to establishing and maintaining various levels of access control. Mule itself has always facilitated secure message processing at the level of the transport, the service layer and the message. Mule configurations can include all that Spring Security has to offer, giving, for example, easy access to an LDAP server for authentication and authorisation. On top of that, Mule Applications can apply WS-Security, facilitating, for example, the validation of incoming SAML messages. But in this post, rather than delve into all the details of this very extensive security feature set, I would rather approach the subject by considering the primary concerns that drive the need for security in a Service Oriented Architecture, how the industry as a whole has addressed those concerns, the consequent emergence of popular technologies based on this industrial best practice and, finally, the implementation of these technologies in Mule.
Primary Concerns
Integrity
This is all about knowing who sent the
Message. Securing our IT resources is a matter of deciding who gets
access to them and of course, entering into the realms of Authorisation,
to what extent each person or system should have access to them. A
Message (which is also akin to a service invocation) must be determined
to be authentic in order for the Server to accept it and process it.
It’s authentic if the Server recognises the Client as a valid user of
the service. Now such recognition is usually achieved by some sort of
Credentials accompanying the Message. However, verifying
which Client sent the Message does not guarantee the Integrity of the
Message: it may have been modified by some unfriendly third party during
transit! Message Integrity, which
includes Authentication, guarantees that the Message the Server
received is exactly the one that was sent by the known Client.
Confidentiality
It is all very well for the Server to rest assured with the Integrity
of a Message sent by a known Client, but the journey from Client to
Server may have been witnessed by some unwelcome spies who got to see
all of those potentially very private details inside the Message! Thus,
it is necessary to hide those details from the moment of delivery by the Client until reception by the Server. An agreement is needed between the Client and Server so that the details of the Message can be hidden in a way that allows only the Server to uncover them, and vice versa.
Response of the Industry
Token based Credentials
The common practice of sending Username / Password pairs with Messages is not advisable, for two reasons:
- Passwords have a level of predictability, whereas the ideal is to maximise randomness, or entropy. Username / Password pairs are a low-entropy form of authentication.
- The maintenance of Passwords is a pain! If you need to change a password then you immediately affect all Clients that use it; until each of these has been reconfigured you have broken communication with them. As a consequence, there is no way to block access to one Client in particular without blocking all the Clients that use the same password.
Token based Credentials offer a more secure form of Authentication and, as we'll see, Authorisation.
The idea is for the Server to issue tokens based on an initial
authentication request with Username / Password credentials. From then
on the Client only has to send the token, so the net result is a great
reduction in Username / Password credentials going to and fro on the
network. Also, Tokens usually are issued with an expiration period and
can even be revoked. Furthermore, because they are issued uniquely to
each Client, when you choose to revoke a particular Token or if it
expires, none of the other Clients will suffer any consequences.
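As a sketch of the idea only (not Mule's actual implementation), a token issuer with expiry and per-client revocation might look like this in Python; the helper names and in-memory store are invented for illustration:

```python
import secrets
import time

# Hypothetical in-memory token store: token -> (client_id, expiry timestamp).
# A real server would persist this and associate tokens with scopes and roles.
tokens = {}

def issue_token(client_id, ttl_seconds=300):
    """Issue a random, high-entropy token that expires after ttl_seconds."""
    token = secrets.token_urlsafe(32)   # ~256 bits of entropy
    tokens[token] = (client_id, time.time() + ttl_seconds)
    return token

def validate_token(token):
    """Return the client_id if the token is known and unexpired, else None."""
    entry = tokens.get(token)
    if entry is None:
        return None
    client_id, expiry = entry
    if time.time() > expiry:
        del tokens[token]               # expired: drop it
        return None
    return client_id

def revoke_token(token):
    """Revoking one client's token leaves every other client unaffected."""
    tokens.pop(token, None)
```

Because each token is issued uniquely, revoking one affects only the Client that holds it — exactly the property that shared passwords lack.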
Digital Signing
We humans sign all kinds of documents when it matters in the civil,
legal and even personal transactions in which we partake. It is a
mechanism we use to establish the authenticity of the transaction. The
digital world mimics this with its use of Digital Signatures.
The idea is for the Client to produce a signature by using some
algorithm and a secret code. The Server should apply the same algorithm
with a secret code to produce its own signature and compare the incoming
signature against this. If the two match, the Server has effectively
completed Authentication by guaranteeing not only that this Message was
sent by a known Client (only a known Client could have produced a
recognisable signature), but that it has maintained its integrity
because it was not modified by a third party while in transit. As an
added benefit for when it matters with third party Clients, the
mechanism also brings Non-repudiation into the equation because the
Client cannot claim not to have sent the signed Message.
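A minimal illustration of this sign-and-compare mechanism, using symmetric HMAC signatures from Python's standard library (the shared secret and message here are invented for the example):

```python
import hmac
import hashlib

# Shared secret known only to Client and Server (an assumption for this sketch).
SECRET = b"shared-secret-key"

def sign(message: bytes) -> str:
    """Client side: compute an HMAC-SHA256 signature over the message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

msg = b'{"product": "widget", "qty": 3}'
sig = sign(msg)
assert verify(msg, sig)                   # authentic and intact
assert not verify(b'{"qty": 999}', sig)   # a tampered message is rejected
```

A modified Message no longer matches its signature, which is how signing delivers Integrity as well as Authentication.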
Public Key Cryptography
The age-old practice of Cryptography has made a science of the art of hiding things! IT has adopted this science and can produce an Encryption
of the message which is practically impossible to decrypt without a
corresponding key to do so. It is as if the Client had the ability to
lock a Message inside some imaginary box with a special key, hiding it
from prying eyes, until the Server unlocks the box with its own special
key. The Digital Signing discussed above produces signatures in this
very way. Cryptography comes in two forms: Symmetric, when both Client
and Server share the same key to encrypt and decrypt the Message; and
Asymmetric, when the Server issues a public key to the Client allowing
the Client to encrypt the Message, but keeps a private key which is the
only one that can decrypt the Message: one key to lock the Message and
another key to unlock it!
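The one-key-to-lock, another-key-to-unlock idea can be seen in a toy RSA example using the classic small-prime textbook numbers. This is for intuition only — real keys are thousands of bits and such tiny numbers are trivially breakable:

```python
# Toy RSA: anyone holding the public pair (e, n) can lock a message,
# but only the private exponent d can unlock it. NOT secure as written.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent: the "locking" key
d = pow(e, -1, phi)            # private exponent: the "unlocking" key

message = 65                   # a message encoded as a number < n
ciphertext = pow(message, e, n)    # Client encrypts with the public key
decrypted = pow(ciphertext, d, n)  # only the private key recovers it

assert decrypted == message
assert ciphertext != message
```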
De facto Standard Implementations
HTTPS
This is a rock-solid standard protocol that implements both Integrity and Confidentiality at the level of the transport. Public Keys are distributed on Certificates which have been digitally signed by independent and trusted Certificate Authorities, thus guaranteeing that the public key was issued by the Server. Once the initial handshake has been completed by the exchange of Messages using public and private keys, the communication switches to the more efficient symmetric form using a shared key generated just for the duration of the communication, all of which occurs transparently.
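Python's standard ssl module reflects these certificate checks in its client defaults; a default context refuses connections unless the Server presents a CA-signed certificate whose identity matches the hostname:

```python
import ssl

# A default client context enforces exactly the guarantees described above:
# the server must present a certificate, the certificate must chain up to a
# trusted Certificate Authority, and its identity must match the hostname.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED   # server must present a cert
assert ctx.check_hostname                     # cert identity must match host
```

Such a context can then be passed to urllib or wrapped around a socket; the symmetric session-key negotiation happens transparently during the handshake.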
OAuth2
This emerging standard governs the world of Authorisation using Tokens. I won't go into all of the complexities of the complete OAuth2 dance here, but I can recommend OAuth2 as a valid way to secure our Enterprise, one which scales well to meet the needs of the SaaS-oriented New Enterprise. To that end, there are two types of Clients we should cater for in our secured SOA architecture:
- In-house Applications which are typically exposed to end-users. These should provide the username and password of the end-user and request a token on the strength of those. The process also affords us the luxury of Single Sign-On, because the token can be stored by the browser as a cookie based on the domain name of the organisation, where all other web applications can access it. This is what Google does with its SSO for each of its cloud apps like Gmail, Calendar, Documents, etc.
- Third-party Applications providing services to our users but to which we'd like to grant limited access to our systems. We don't want those Applications getting their hands on our end-users' credentials, so we can force them through the typical OAuth2 dance, which is what we all see so often nowadays when websites invite us to sign in using our Google, Facebook or Twitter accounts.
Let’s implement a RESTful webservice
in Mule which will expose a list of Products in our online shop to
various Client Applications. We will configure the access control so
that certain operations are available only to certain Clients. We could
even apply more specific access control by considering the roles of the
users of these Applications: Admin for complete access and Standard for
read-only access.
HTTPS Inbound Endpoint
The https inbound endpoint on our API needs to use a connector
with a reference to a keystore. A keystore is a repository of public
key certificates together with their private keys. These certificates
are sent to the Client upon the first HTTPS request. The certificate
contains the public key and identity of the server and is digitally
signed either by the same server (self-signed certificate) or by an
independent Certificate Authority. You can create your own self-signed
certificate for development purposes using the JDK keytool utility. The keystore needs a password both for the keystore and for the private key.
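As a sketch, such a self-signed development keystore could be generated with the JDK keytool utility along these lines — the alias, distinguished name and validity shown are assumptions to adjust for your environment, while the path and passwords match the connector configuration:

```shell
# Generate a self-signed RSA key pair in a JKS keystore (development only).
keytool -genkeypair -alias mule -keyalg RSA -keysize 2048 \
        -dname "CN=localhost" -validity 365 \
        -keystore src/main/resources/keystore.jks \
        -storepass mule123 -keypass mule123
```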
<https:connector name="httpsConnector" cookieSpec="netscape" validateConnections="true"
                 sendBufferSize="0" receiveBufferSize="0" receiveBacklog="0"
                 clientSoTimeout="10000" serverSoTimeout="10000" socketSoLinger="0"
                 doc:name="HTTP\HTTPS">
    <https:tls-key-store path="src/main/resources/keystore.jks"
                         keyPassword="mule123" storePassword="mule123"/>
</https:connector>
Anypoint Secure Token Service
Mule can now act as an OAuth2 Provider, issuing tokens to registered
Clients, applying expiration periods to these tokens, associating them
to User roles and fine-grained access control known in the OAuth
world as scopes. Refresh tokens can also be issued and tokens can be
invalidated. Mule can of course subsequently validate incoming tokens
against expiration periods, roles and scopes and thus grant or deny
access to the Flows in the Application.
<oauth2-provider:config host="0.0.0.0" port="8081" name="oauth2-provider"
        accessTokenEndpointPath="access-token" scopes="READ WRITE"
        resourceOwnerSecurityProvider-ref="security-provider"
        supportedGrantTypes="IMPLICIT RESOURCE_OWNER_PASSWORD_CREDENTIALS"
        tokenTtlSeconds="300" enableRefreshToken="true"
        authorizationEndpointPath="authorization-code" loginPage="login.html"
        connector-ref="httpsConnector" doc:name="OAuth provider module">
    <oauth2-provider:clients>
        <oauth2-provider:client clientId="customer-web-app" type="PUBLIC"
                clientName="Web UI" description="Javascript web UI">
            <oauth2-provider:authorized-grant-types>
                <oauth2-provider:authorized-grant-type>PASSWORD</oauth2-provider:authorized-grant-type>
            </oauth2-provider:authorized-grant-types>
            <oauth2-provider:scopes>
                <oauth2-provider:scope>READ</oauth2-provider:scope>
            </oauth2-provider:scopes>
        </oauth2-provider:client>
        <oauth2-provider:client clientId="admin-web-app" type="PUBLIC"
                clientName="Private Web UI" description="Javascript web UI">
            <oauth2-provider:authorized-grant-types>
                <oauth2-provider:authorized-grant-type>PASSWORD</oauth2-provider:authorized-grant-type>
                <oauth2-provider:authorized-grant-type>REFRESH_TOKEN</oauth2-provider:authorized-grant-type>
            </oauth2-provider:authorized-grant-types>
            <oauth2-provider:scopes>
                <oauth2-provider:scope>READ</oauth2-provider:scope>
                <oauth2-provider:scope>WRITE</oauth2-provider:scope>
            </oauth2-provider:scopes>
        </oauth2-provider:client>
    </oauth2-provider:clients>
</oauth2-provider:config>
The above configuration registers two different clients for our API:
- Web UI: a public web application providing read-only access to the protected Product listing.
- Private Web UI: an internal admin app which allows Administrators to add new Products.
Both are trusted, in-house applications as described above and as such may exchange User Credentials directly for a Token. For example:
POST https://localhost:8081/access-token

grant_type=password&client_id=admin-web-app&username=nialdarbey&password=hello123&scope=READ%20WRITE
This would give a Response something like:
{
    "scope": "READ WRITE",
    "expires_in": 299,
    "token_type": "bearer",
    "access_token": "l8bFMEC9PA7NcpmHeTYS43Wl96_Y6LuIOhGci2zMJf0Qso9llgRLkgQjarMzUhvQz8vGVHmazrZ2C-Gjo20khg"
}
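A Client would parse this response and attach the token to every subsequent call; a minimal Python sketch using the sample payload above (the header construction follows the usual bearer-token convention):

```python
import json

# The sample token Response body from above, parsed as a client would.
response_body = """{
  "scope": "READ WRITE",
  "expires_in": 299,
  "token_type": "bearer",
  "access_token": "l8bFMEC9PA7NcpmHeTYS43Wl96_Y6LuIOhGci2zMJf0Qso9llgRLkgQjarMzUhvQz8vGVHmazrZ2C-Gjo20khg"
}"""

token = json.loads(response_body)

# Subsequent requests carry the token as a bearer Authorization header,
# which the provider validates against expiry, revocation and scopes.
auth_header = {"Authorization": "Bearer " + token["access_token"]}
```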
The request includes both scopes, READ and WRITE, which appear among the requestable scopes for that particular client. Scopes represent broad levels of access to the Mule flows. The provided access token must be sent with each request and can be validated by Mule to ensure it hasn't expired or been revoked and that it carries the scopes that correspond to a particular flow. In the following example we only allow requests that have the WRITE scope.
<flow name="createProduct" doc:name="createProduct">
    <oauth2-provider:validate scopes="WRITE" config-ref="oauth2-provider"
                              doc:name="Validate WRITE scope"/>
    <logger level="INFO" doc:name="Logger"/>
    <jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryTimeout="-1"
                               connector-ref="Database" doc:name="Select Manufacturers">
        <jdbc-ee:query key="insert"
                       value="insert into products (name, description) values (#[payload.name], #[payload.description])"/>
    </jdbc-ee:outbound-endpoint>
</flow>
More fine-grained control can also be applied by comparing the role of the user for whom the token was issued with the roles allowed for the flow. The validate filter has a resourceOwnerRoles attribute to specify these, so access control can be as granular as either the scope or the role.
<oauth2-provider:validate resourceOwnerRoles="Administrator" scopes="WRITE"
                          config-ref="oauth2-provider" doc:name="Validate WRITE scope"/>
As we venture into the world of the New Enterprise, we will no doubt have to cater for applications belonging to partners. Imagine we were to expose access to our service to a Mobile Application. We need only register this new client in our OAuth2 Provider configuration. Note how the grant type for this configuration is TOKEN, which corresponds to the IMPLICIT type in the OAuth2 specification. This will result in the full dance that we have all experienced when websites allow us to sign in using our Google, Facebook or Twitter accounts.
<oauth2-provider:client clientId="partner-smartphone-app" type="PUBLIC"
        clientName="Smartphone App" description="Smartphone app produced by partner">
    <oauth2-provider:redirect-uris>
        <oauth2-provider:redirect-uri>http://localhost*</oauth2-provider:redirect-uri>
    </oauth2-provider:redirect-uris>
    <oauth2-provider:authorized-grant-types>
        <oauth2-provider:authorized-grant-type>TOKEN</oauth2-provider:authorized-grant-type>
    </oauth2-provider:authorized-grant-types>
    <oauth2-provider:scopes>
        <oauth2-provider:scope>READ</oauth2-provider:scope>
    </oauth2-provider:scopes>
</oauth2-provider:client>
Finally
Anypoint Enterprise Security also allows us to explicitly sign Messages and verify incoming signatures, and to encrypt Messages with 3 different strategies and 20 different algorithms, as well as to decrypt incoming Messages. There may be cases when you have to explicitly sign or encrypt Messages you send out to third parties, or likewise decrypt and verify signatures from third parties. For the Clients over which we have complete control in our architecture, it is sufficient to use HTTPS, but for those sideline cases you have all the power of the best that the Industry has to offer in the extremely easy configurations that Mule provides. You can download the above example Application here.
APIs: A new path to SOA
Long ago, enterprise
companies started using software to manage information across the
enterprise. As the number of systems within the enterprise grew, the
need to synchronize all these isolated systems emerged. From this came
the need to find a solution allowing system-to-system communication, and
the easiest approach for this was point-to-point integration. Soon,
enterprises found themselves with several systems interconnected via point-to-point integration,
resulting in a maintenance nightmare. Rather than being unified, each
system had its own communication protocol – some were file based,
others were databases, and if we were lucky, some of them used web services.
The mess began to look something like this:
Point-to-point integrations
were typically done with little or no documentation. With numerous
systems and poor documentation, it was hard to keep track of which
system was doing what. Moreover, it became difficult to reuse
integrations. As a result, users started creating duplicate behavior
within different systems, further increasing the complexity.
The software world realized that it was impractical to
continue working like that; there needed to be a better way to organize
interconnected systems.
What is SOA?
We can define service oriented architecture (SOA) as an architectural pattern. Some of its main pillars are:
Services must be reusable
Services must have a service contract
Services can be easily discovered
Services can be composed into other services
When designing systems with these pillars in mind, it
becomes easy to build applications that provide reusable services.
Having an easy way of discovering those services and a clear interface
to communicate with them promotes reusability of software components and
allows the creation of systems that provide more concrete services by
composing services from other systems.
You can argue that we still have the point-to-point
nightmare depicted in the “spaghetti” picture above. That is where the
last pillar comes in: Service location transparency.
Service location transparency
means that the client of a certain service must be agnostic of its
physical location. Most of the time, this is accomplished by using a
service bus that decouples the service location from the service
consumer.
What are APIs?
Application Programming Interfaces (APIs)
have been around for quite some time, yet their popularity has recently
skyrocketed. Whenever you use a third party library within your
application you are using an API, whether it’s correctly defined or not,
to communicate to it. APIs
by definition are source code based specifications intended to be used
as interfaces by software components to communicate with each other.
Why is everybody talking about APIs?
IT has evolved in a way that has given APIs another
meaning. Generally, when the term “API” is brought up, the first thing
that comes to mind are those of publicly available APIs like Twitter or Twilio – and there’s a good reason for it.
In the last 10 years the number of APIs has grown
dramatically. Company applications are not isolated anymore, but rather,
we see SaaS
applications connecting to other SaaS applications. You can see this
every day. Whenever you visit a new site and it allows you to log in
using your Google account, there’s communication going on between the
site application and the Google API to authenticate you.
How do SOA and APIs play together?
API expansion required a “simple” way to create APIs.
“Simple” in a way that allows any type of client to rapidly develop a
consumer of that API providing faster time to value. Therefore, APIs
share the following SOA pillars:
Services must be reusable – APIs are defined to be public so they can be reused by other applications.
Services must have a service contract – This is a must since it’s an API.
Services can be easily discovered – Most APIs have good documentation with lots of examples.
Services can be composed into other services – Companies such as ark.com provide an API that behind the scenes is composing services provided by other APIs.
When defining reusable services within a
SOA-initiative-driven company, one must take into consideration who the
service consumers are and how to manage them. For instance, a single
service consumer may try to invoke a service thousands of times per
second. Managing this service access is called SOA governance
and requires managing permissions for which consumers can access which
services and defining SLAs, which may involve policies such as
throttling service access.
With APIs, it’s pretty much the same, but at a much larger
scale. There are public APIs and different kinds of consumers for that
API such as partners and developers. Managing all these is called API Management and is similar to the concept of SOA governance. An example of an API management tool is MuleSoft’s Anypoint API Manager.
APIs can be seen as a means to a SOA implementation.
Companies will continue to push for the pillars of SOA but the set of
technologies used is going to match the ones that are emerging from the
API world. One clear example is the shift that’s happening from SOAP to
REST.
For the past year, we have seen the same evolution that
happened within the enterprise now happening at a broader level between
companies. APIs will keep increasing exponentially in number, and that
evolution will be accompanied by a new breed of technologies focused on
simplifying the exposing and consuming of APIs.
See what MuleSoft is up to in the API space by visiting our Anypoint Platform for APIs page. Or, take a look at our entire Anypoint Platform, consisting of solutions for APIs, SOA, and SaaS. Don’t forget to follow us on Twitter @MuleSoft!
Minding the API Hierarchy of Needs
The rising popularity of APIs as an architectural
and development pattern has driven a massive shift in how we think
about application and systems design; but how are we thinking about APIs
themselves? As API adoption increases, we need to learn how to mitigate
risk, maximize utility, and ensure we are building the right APIs for
the right people.
Reza Shafii, Director of Product Management, introduces the API Hierarchy of Needs in his InfoQ article, “Minding the API Hierarchy of Needs with RAML and APIkit“, and discusses the advantage of a holistic, broadly inclusive approach to API initiatives.
He argues that it is a tempting, and common, shortcut to jump right to API Management
as a primary concern, without first investing in the ‘meat’ of your API
– design and implementation. In the article, Reza shows how RAML and APIkit
can easily be used to fulfill the two foundational levels of the API
Hierarchy of Needs by helping you first design, and then implement your
APIs in a way that can drive API consistency, quality, and usability.
Read the entire article on InfoQ»