
[PATCH] powerpc: IOMMU SG paranoia

This addresses two items, which are unlikely to be hit if we
trust drivers.

The first is moving a memory barrier to below where the list is
terminated, but before the vmerged SG count is passed back.  If those
stores were reordered, iommu_unmap_sg() could walk an unterminated list.
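
For context, iommu_unmap_sg() stops walking at the first zero-length
entry, which is why the terminator store has to be ordered before the
list can be consumed.  A minimal sketch of that walk, with simplified
stand-in types rather than the actual kernel source:

	struct sg_ent {
		unsigned long dma_address;
		unsigned int  dma_length;
	};

	/* Sketch of the unmap-side walk: a zero dma_length acts as the
	 * list terminator.  If the terminator store is not yet visible,
	 * the walk runs past the mapped entries. */
	static void unmap_walk(struct sg_ent *sg, int nelems)
	{
		while (nelems-- > 0) {
			if (sg->dma_length == 0)
				break;		/* terminator reached */
			/* ...free the IOMMU pages backing this entry... */
			sg++;
		}
	}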

The second is making sure we terminate the list on the failure case of
iommu_map_sg().  If a driver does not check the failure return code,
it could pass an ill-formed SG list to iommu_unmap_sg().
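
The driver-side pattern being guarded against looks roughly like this
(hypothetical helpers map_sg()/unmap_sg() stand in for the real calls,
reusing struct sg_ent from the sketch above):

	int  map_sg(struct sg_ent *sg, int nelems);	/* returns 0 on failure */
	void unmap_sg(struct sg_ent *sg, int nelems);

	static void careless_driver(struct sg_ent *sg, int nelems)
	{
		/* Return value ignored: on failure, only part of the
		 * list was filled in by the map step. */
		(void)map_sg(sg, nelems);

		/* Still walks the whole list, so the failure path of
		 * the map step must have terminated it. */
		unmap_sg(sg, nelems);
	}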

Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Jake Moilanen, 19 years ago
Commit a958a26486
1 changed file, 6 insertions(+), 3 deletions(-)

arch/powerpc/kernel/iommu.c  +6 -3

@@ -334,9 +334,6 @@ int iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 
 	spin_unlock_irqrestore(&(tbl->it_lock), flags);
 
-	/* Make sure updates are seen by hardware */
-	mb();
-
 	DBG("mapped %d elements:\n", outcount);
 
 	/* For the sake of iommu_unmap_sg, we clear out the length in the
@@ -347,6 +344,10 @@ int iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 		outs->dma_address = DMA_ERROR_CODE;
 		outs->dma_length = 0;
 	}
+
+	/* Make sure updates are seen by hardware */
+	mb();
+
 	return outcount;
 
  failure:
@@ -358,6 +359,8 @@ int iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 			npages = (PAGE_ALIGN(s->dma_address + s->dma_length) - vaddr)
 				>> PAGE_SHIFT;
 			__iommu_free(tbl, vaddr, npages);
+			s->dma_address = DMA_ERROR_CODE;
+			s->dma_length = 0;
 		}
 	}
 	spin_unlock_irqrestore(&(tbl->it_lock), flags);