We have a similar setup, but a bit more complex.
Our infrastructure is in 2 datacenters, with firewalls in each datacenter facing our peripheral sites. Each datacenter is connected to the peripheral sites by MPLS lines going through our firewalls, so if one firewall/router fails we want the traffic to pass through the other datacenter's routers/firewalls.
Our 2 datacenters each have a Nexus 7K, and we decided to use BGP to pass routing information between the datacenters and to the peripheral sites.
Our L3 public VLANs are configured at both sites: the VLAN at the site hosting the active servers is configured in an UP state, and the same VLAN (with the same IP on the interface) is configured DOWN at the DR site (we have many VLANs, some active at one site and some at the other).
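As a rough sketch of what an active/DR SVI pair looks like on NX-OS (VLAN number and addressing are made up for illustration):

```
! Site1 (active) Nexus 7K -- SVI is up
interface Vlan100
  ip address 192.0.2.1/24
  no shutdown

! Site2 (DR) Nexus 7K -- same SVI, same IP, kept administratively down
interface Vlan100
  ip address 192.0.2.1/24
  shutdown
```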
The Nexus 7Ks have a BGP session between them, and each has a session with the firewall at its local site. The firewalls have BGP sessions with the MPLS routers. Servers have a default route to the Nexus switches. The "preferred" route from an external site will be the one with the shortest path (a server in site1 is reachable in 2 hops through the site1 routers and in 3 hops through the site2 routers). If the MPLS routers or firewalls at site1 fail, connections to site1 will pass through site2, and vice versa. If the connection between the sites fails, traffic from site1 to site2 (and vice versa) will go through the MPLS network, so servers in the 2 sites will still be able to communicate.
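The BGP sessions on the Nexus side could look roughly like this (AS numbers and peer addresses are invented, and a real config would also need route policies and prefix advertisements):

```
feature bgp

router bgp 65001
  ! iBGP to the Nexus 7K in the other datacenter
  neighbor 10.255.0.2 remote-as 65001
    address-family ipv4 unicast
  ! eBGP to the local firewall, which in turn peers with the MPLS routers
  neighbor 10.255.1.2 remote-as 65010
    address-family ipv4 unicast
```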
In each site we also have a separate, non-routed L2 network used for VMware and server storage NFS access. Our storage systems on that network are configured with exactly the same IP address, so when we fail over a VM with an NFS mount (like an Oracle database) it will find the mountpoint and be able to use the volumes without any configuration changes.
We are also configuring, when possible, a single VLAN/datastore per application, so that we can move a single application with just a few simple operations (stop the VM, SnapMirror, disable the VLAN at site1, enable it at site2, SRM migrate).
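The "disable VLAN at site1, enable at site2" step is just shutting and unshutting the SVI on each Nexus (VLAN number is hypothetical; the SnapMirror and SRM steps happen on the storage/VMware side):

```
! On the site1 Nexus 7K: deactivate the application's VLAN
interface Vlan210
  shutdown

! On the site2 Nexus 7K: activate the same VLAN
interface Vlan210
  no shutdown
```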
This is probably more than you need, but leaving out all the routing you can do something similar. Create a separate VLAN interconnecting the 2 sites to use as a routing VLAN, and configure the same server VLAN as active in one of the 2 sites and inactive in the other. When you SRM to the secondary site you only need to activate the VLAN there so the servers keep the same default routers. The only prerequisite is that servers on a specific VLAN must live in only one of the 2 sites at a time.
I'm not sure (I'm not a network expert), but you can probably also configure it this way (I don't know about Juniper, but I think it has this functionality): each site has its own IP on the VLAN (like .2 and .3), then you configure HSRP (HA between switches) between the 2 switches with .1 and set it up so that it normally stays at your primary site. In case of a disaster, the switch at site1 will not be available and the switch at site2 will take over .1, so your VMs will not need to be reconfigured. I don't know if HSRP will work over geographic 13 ms links or if Juniper supports it, but it should work. The only problem is if the networking between the sites goes down: you will probably end up with .1 active in each site, but that may not be a problem (VMs in each site will still find their default router) if you have another kind of backup connection between them.
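On Cisco NX-OS an HSRP setup like that would look something like the sketch below (addresses and priorities are invented; I can't speak for the Juniper syntax, and there the equivalent would likely be VRRP rather than HSRP):

```
feature hsrp

! Primary-site switch (.2), normally holds the .1 virtual IP
interface Vlan100
  ip address 192.0.2.2/24
  hsrp 1
    ip 192.0.2.1
    priority 110
    preempt

! Secondary-site switch (.3), takes over .1 if site1 is lost
interface Vlan100
  ip address 192.0.2.3/24
  hsrp 1
    ip 192.0.2.1
    priority 90
```

The higher priority plus preempt keeps the virtual IP at the primary site whenever that switch is healthy.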